Results 1 - 10 of 10
1.
Brain Commun ; 6(1): fcae024, 2024.
Article in English | MEDLINE | ID: mdl-38370445

ABSTRACT

Individuals with post-stroke aphasia tend to recover their language to some extent; however, it remains challenging to reliably predict the nature and extent of recovery that will occur in the long term. The aim of this study was to quantitatively predict language outcomes in the first year of recovery from aphasia across multiple domains of language and at multiple timepoints post-stroke. We recruited 217 patients with aphasia following acute left hemisphere ischaemic or haemorrhagic stroke and evaluated their speech and language function using the Quick Aphasia Battery acutely and then acquired longitudinal follow-up data at up to three timepoints post-stroke: 1 month (n = 102), 3 months (n = 98) and 1 year (n = 74). We used support vector regression to predict language outcomes at each timepoint using acute clinical imaging data, demographic variables and initial aphasia severity as input. We found that ∼60% of the variance in long-term (1 year) aphasia severity could be predicted using these models, with detailed information about lesion location importantly contributing to these predictions. Predictions at the 1- and 3-month timepoints were somewhat less accurate based on lesion location alone, but reached comparable accuracy to predictions at the 1-year timepoint when initial aphasia severity was included in the models. Specific subdomains of language besides overall severity were predicted with varying but often similar degrees of accuracy. Our findings demonstrate the feasibility of using support vector regression models with leave-one-out cross-validation to make personalized predictions about long-term recovery from aphasia and provide a valuable neuroanatomical baseline upon which to build future models incorporating information beyond neuroanatomical and demographic predictors.
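The modeling approach described above can be sketched as follows. This is an illustrative example only, not the authors' code: the features, outcome scores, and data are synthetic stand-ins for the lesion-location, demographic, and initial-severity predictors named in the abstract.

```python
# Sketch of support vector regression with leave-one-out cross-validation
# for predicting a continuous aphasia-severity outcome. All feature names
# and data here are hypothetical.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n_patients, n_features = 74, 20          # e.g. lesion-location loads, age, initial severity
X = rng.normal(size=(n_patients, n_features))
y = rng.uniform(0, 10, size=n_patients)  # e.g. a language-severity score at 1 year

model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0))

# Leave-one-out: each patient's outcome is predicted by a model trained
# on all remaining patients, so every prediction is for held-out data.
y_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())

# Variance explained across held-out patients (the "~60%" figure in the
# abstract corresponds to this kind of cross-validated statistic).
r2 = 1 - np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"cross-validated R^2 = {r2:.2f}")
```

With purely random features, as here, the cross-validated R² will be near or below zero; with informative lesion and severity predictors it would approach the values reported in the abstract.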

2.
Cortex ; 173: 96-119, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38387377

ABSTRACT

Word deafness is a rare neurological disorder often observed following bilateral damage to superior temporal cortex and canonically defined as an auditory modality-specific deficit in word comprehension. The extent to which word deafness is dissociable from aphasia remains unclear given its heterogeneous presentation, and some have consequently posited that word deafness instead represents a stage in recovery from aphasia, where auditory and linguistic processing are affected to varying degrees and improve at differing rates. Here, we report a case of an individual (Mr. C) with bilateral temporal lobe lesions whose presentation evolved from a severe aphasia to an atypical form of word deafness, where auditory linguistic processing was impaired at the sentence level and beyond. We first reconstructed in detail Mr. C's stroke recovery through medical record review and supplemental interviewing. Then, using behavioral testing and multimodal neuroimaging, we documented a predominant auditory linguistic deficit in sentence and narrative comprehension-with markedly reduced behavioral performance and absent brain activation in the language network in the spoken modality exclusively. In contrast, Mr. C displayed near-unimpaired behavioral performance and robust brain activations in the language network for the linguistic processing of words, irrespective of modality. We argue that these findings not only support the view of word deafness as a stage in aphasia recovery but also further instantiate the important role of left superior temporal cortex in auditory linguistic processing.


Subject(s)
Aphasia, Deafness, Language Development Disorders, Stroke, Humans, Neuropsychological Tests, Aphasia/etiology, Stroke/complications, Temporal Lobe/pathology, Auditory Perception
3.
Perspect ASHA Spec Interest Groups ; 7(5): 1-11, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36311052

ABSTRACT

Purpose: Community aphasia groups serve an important purpose in enhancing the quality of life and psychosocial well-being of individuals with chronic aphasia. Here, we describe the Aphasia Group of Middle Tennessee, a community aphasia group with a 17-year (and continuing) history, housed within Vanderbilt University Medical Center in Nashville, Tennessee. Method: We describe in detail the history, philosophy, design, curriculum, and facilitation model of this group. We also present both quantitative and qualitative outcomes from group members and their loved ones. Results: Group members and their loved ones alike indicated highly positive assessments of the format and value of the Aphasia Group of Middle Tennessee. Conclusion: By characterizing in detail the successful Aphasia Group of Middle Tennessee, we hope this can serve as a model for clinicians interested in starting their own community aphasia groups, in addition to reaching individuals living with chronic aphasia and their loved ones through the accessible and aphasia-friendly materials provided with this clinical focus article.

4.
Neurosci Biobehav Rev ; 136: 104588, 2022 05.
Article in English | MEDLINE | ID: mdl-35259422

ABSTRACT

We conducted a systematic review and meta-analysis of 30 functional magnetic resonance imaging studies investigating processing of musical rhythms in neurotypical adults. First, we identified a general network for musical rhythm, encompassing all relevant sensory and motor processes (Beat-based vs. rest baseline, 12 contrasts), which revealed a large network involving auditory and motor regions. This network included the bilateral superior temporal cortices, supplementary motor area (SMA), putamen, and cerebellum. Second, we identified more precise loci for beat-based musical rhythms (Beat-based vs. audio-motor control, 8 contrasts) in the bilateral putamen. Third, we identified regions modulated by beat-based rhythmic complexity (Complexity, 16 contrasts), which included the bilateral SMA-proper/pre-SMA, cerebellum, inferior parietal regions, and right temporal areas. This meta-analysis suggests that musical rhythm is largely represented in a bilateral cortico-subcortical network. Our findings align with existing theoretical frameworks about auditory-motor coupling to a musical beat and provide a foundation for studying how the neural bases of musical rhythm may overlap with other cognitive domains.


Subject(s)
Music, Adult, Auditory Perception, Brain/diagnostic imaging, Brain Mapping, Functional Neuroimaging, Humans, Magnetic Resonance Imaging
5.
Article in English | MEDLINE | ID: mdl-33419711

ABSTRACT

BACKGROUND: Williams syndrome (WS) is a neurodevelopmental disorder characterized by hypersociability, heightened auditory sensitivities, attention deficits, and strong musical interests despite differences in musical skills. Behavioral studies have reported that individuals with WS exhibit variable beat and rhythm perception skills. METHODS: We sought to investigate the neural basis of beat tracking in individuals with WS using electroencephalography. Twenty-seven adults with WS and 16 age-matched, typically developing control subjects passively listened to musical rhythms with accents on either the first or second tone of the repeating pattern, leading to distinct beat percepts. RESULTS: Consistent with the role of beta and gamma oscillations in rhythm processing, individuals with WS and typically developing control subjects showed strong evoked neural activity in both the beta (13-30 Hz) and gamma (31-55 Hz) frequency bands in response to beat onsets. This neural response was somewhat more distributed across the scalp for individuals with WS. Compared with typically developing control subjects, individuals with WS exhibited significantly greater amplitude of auditory evoked potentials (P1-N1-P2 complex) and modulations in evoked alpha (8-12 Hz) activity, reflective of sensory and attentional processes. Individuals with WS also exhibited markedly stable neural responses over the course of the experiment, and these responses were significantly more stable than those of control subjects. CONCLUSIONS: These results provide neurophysiological evidence for dynamic beat tracking in WS and coincide with the atypical auditory phenotype and attentional difficulties seen in this population.
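The band-limited analysis named in the results above can be sketched in code. This is an illustrative example, not the study's pipeline: the signal is synthetic, and the sampling rate and filter order are assumptions; only the beta (13-30 Hz) and gamma (31-55 Hz) band edges come from the abstract.

```python
# Sketch: isolating beta- and gamma-band EEG activity with a zero-phase
# Butterworth band-pass filter. Signal and parameters are hypothetical.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                                # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)

# Synthetic "EEG": a 20 Hz (beta) and a weaker 40 Hz (gamma) component plus noise
sig = np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
sig += 0.2 * np.random.default_rng(0).normal(size=t.size)

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter between lo and hi Hz."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

beta = bandpass(sig, 13, 30, fs)    # beta-band (13-30 Hz) activity
gamma = bandpass(sig, 31, 55, fs)   # gamma-band (31-55 Hz) activity
print(np.var(beta), np.var(gamma))  # crude band-power estimates
```

Evoked responses to beat onsets, as in the study, would additionally be time-locked and averaged across trials; the filtering step shown here is only the band-isolation part.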


Subject(s)
Williams Syndrome, Humans, Evoked Potentials, Auditory/physiology, Auditory Perception/physiology, Electroencephalography, Neurophysiology
6.
Behav Brain Sci ; 44: e101, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34588066

ABSTRACT

Our commentary addresses how two neurodevelopmental disorders, Williams syndrome and autism spectrum disorder, provide novel insights into the credible signaling and music and social bonding hypotheses presented in the two target articles. We suggest that these neurodevelopmental disorders, characterized by atypical social communication, allow us to test hypotheses about music, social bonding, and their underlying neurobiology.


Subject(s)
Autism Spectrum Disorder, Music, Neurodevelopmental Disorders, Williams Syndrome, Attention, Humans
7.
Aphasiology ; 33(4): 382-404, 2019.
Article in English | MEDLINE | ID: mdl-31031508

ABSTRACT

BACKGROUND: Previous work has investigated extensively the neuroanatomical correlates of lexical retrieval for words for concrete entities. Musical entities, such as musical instruments, are often included in studies of category-specific naming deficits, but have rarely been the focus of such work. AIMS: This article reviews a program of research investigating the neuroanatomical basis for lexical retrieval of words for unique (i.e., melodies) and non-unique (i.e., musical instruments) musical entities. MAIN CONTRIBUTION: We begin by reporting findings on the retrieval of words for unique musical entities, including musical melodies. We then consider work focusing on retrieval of words for non-unique musical entities, specifically musical instruments. We highlight similarities between the two lines of work, and then report results from new analyses including direct comparisons between the two. These comparisons suggest that impairments in naming musical melodies and in naming musical instruments are both associated with damage to the left temporal pole (LTP). However, musical instrument naming appears to rely on a more distributed set of brain regions, possibly including those relating to sensorimotor interactions with such instruments, whereas melody naming relies more exclusively on the left temporal pole. CONCLUSIONS: Retrieval of names for musical melodies appears to rely on similar neuroanatomical correlates as for other proper nouns, namely the LTP. Musical instrument naming seems to rely on a broader network of regions, including the LTP and sensorimotor areas. Overall, melody naming seems to coincide with naming of other proper nouns, while musical instrument naming appears distinct from other categories of non-unique items.

8.
Psychon Bull Rev ; 26(2): 583-590, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30238294

ABSTRACT

While many techniques are known to music creators, the technique of repetition is one of the most commonly deployed. The mechanism by which repetition is effective as a music-making tool, however, is unknown. Building on the speech-to-song illusion (Deutsch, Henthorn, & Lapidis in Journal of the Acoustical Society of America, 129(4), 2245-2252, 2011), we explore a phenomenon in which perception of musical attributes is elicited from repeated, or "looped," auditory material usually perceived as nonmusical, such as speech and environmental sounds. We assessed whether this effect holds true for speech stimuli of different lengths; nonspeech sounds (water dripping); and speech signals decomposed into their rhythmic and spectral components. Participants listened to looped stimuli (from 700 to 4,000 ms) and provided continuous as well as discrete perceptual ratings. We show that the regularizing effect of repetition generalizes to nonspeech auditory material and is strongest for shorter clip lengths in the speech and environmental cases. We also find that deconstructed pitch and rhythmic speech components independently elicit a regularizing effect, though the effect across segment duration is different from that for intact speech and environmental sounds. Taken together, these experiments suggest repetition may invoke active internal mechanisms that bias perception toward musical structure.


Subject(s)
Auditory Perception, Music/psychology, Speech, Acoustic Stimulation, Adolescent, Adult, Female, Humans, Male, Recognition, Psychology, Sound, Time Factors, Young Adult
9.
J Exp Psychol Gen ; 147(10): 1531-1543, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30010370

ABSTRACT

In recent years, psychological models of perception have undergone reevaluation due to a broadening of focus toward understanding not only how observers perceive stimuli but also how they subjectively evaluate stimuli. Here, we investigated the time course of such aesthetic evaluations using a gating paradigm. In a series of experiments, participants heard excerpts of classical, jazz, and electronica music. Excerpts were of different durations (250 ms, 500 ms, 750 ms, 1,000 ms, 2,000 ms, 10,000 ms) or note values (eighth note, quarter note, half note, dotted-half note, whole note, and entire 10,000 ms excerpt). After each excerpt, participants rated how much they liked the excerpt on a 9-point Likert scale. In Experiment 1, listeners made accurate aesthetic judgments within 750 ms for classical and jazz pieces, while electronic pieces were judged within 500 ms. When translated into note values (Experiment 2), electronica and jazz clips were judged more quickly than classical. In Experiment 3, we manipulated the familiarity of the musical excerpts. Unfamiliar clips were judged more quickly (500 ms) than familiar clips (750 ms), but there was overall higher accuracy for familiar pieces. Finally, we investigated listeners' aesthetic judgments continuously over the time course of more naturalistic (60 s) excerpts: Within 3 s, listeners' judgments differed between most- and least-liked pieces. We suggest that such rapid aesthetic judgments represent initial gut-level decisions that are made quickly, but that even these initial judgments are influenced by characteristics such as genre and familiarity.


Subject(s)
Affect/physiology, Auditory Perception/physiology, Esthetics, Judgment/physiology, Music, Recognition, Psychology/physiology, Adult, Female, Humans, Male, Young Adult
10.
J Commun Disord ; 75: 72-86, 2018.
Article in English | MEDLINE | ID: mdl-30031236

ABSTRACT

Aphasia, an acquired language disorder resulting from brain damage, affects over one million individuals in the United States alone. Many persons with aphasia (PWA), particularly those with non-fluent aphasia, have been observed to be able to sing the lyrics of songs more easily than they can speak the same words. Remarkably, even humming a melody can facilitate speech output in PWA, and this has been capitalized on in therapies such as Melodic Intonation Therapy (MIT). The current study examined PWA's ability to complete phrases from songs by either singing, speaking, or intoning them in an experimental stem-completion format. Twenty PWA of varying severity, all but one of whom had aphasia as a result of stroke, and 20 age-matched healthy controls participated in the task. The task consisted of three conditions (sung, spoken, and melodic), each consisting of 20 well-known songs. Participants heard the first half of a phrase that was either sung in its original format (sung condition), spoken (spoken condition), or intoned on the syllable "bum" (melodic condition), and were asked to complete the phrase according to the format in which the stimulus was presented. PWA achieved the highest accuracy in the sung condition, followed by the spoken and then melodic conditions, while controls scored comparably in the sung and spoken conditions and much lower in the melodic condition. PWA and controls were better able to access and produce both the melody and lyrics of songs in the sung condition (when both components were presented together) than when the melody and lyrics of songs were presented in isolation. Here, melody confers an advantage for word retrieval for PWA, as lyric production is better in a sung context, and these results substantiate the theoretical framework of MIT. Additionally, the present results may be attributed to the integration hypothesis, which postulates that the text and tune of a song are integrated in memory. Interestingly, a subset of the most severe PWA scored higher in the melodic condition relative to the spoken condition, while this pattern was not found for less severe PWA and for controls. Taken together, our results suggest that singing appears to influence PWA when trying to access the lyrics of songs; access to melody is preserved in PWA even while they exhibit profound and diverse language impairments. Findings may have implications for using music as a more widely implemented tool in speech therapy for PWA.


Subject(s)
Aphasia/psychology, Memory/physiology, Music, Singing, Speech/physiology, Female, Humans, Male, Middle Aged, Speech Production Measurement/methods